An empirical solution for over-pruning with a novel ensemble-learning method for fMRI decoding

Authors

  • Satoshi Hirose
  • Isao Nambu
  • Eiichi Naito
Abstract

BACKGROUND: Recent functional magnetic resonance imaging (fMRI) decoding techniques allow us to predict the contents of sensory and motor events or participants' mental states from multi-voxel patterns of fMRI signals. Sparse logistic regression (SLR) is a useful pattern-classification algorithm with the advantage of automatically selecting voxels to avoid over-fitting. However, SLR suffers from over-pruning, in which many voxels that are potentially useful for prediction are discarded.

NEW METHOD: We propose an ensemble solution for over-pruning, called "Iterative Recycling" (iRec), in which sparse classifiers are trained iteratively by recycling over-pruned voxels.

RESULTS: Our simulation demonstrates that iRec can effectively rectify over-pruning in SLR and improve its classification accuracy. We also conduct an fMRI experiment in which eight healthy volunteers perform a finger-tapping task with their index or middle fingers. The results indicate that SLR with iRec (iSLR) predicts the finger used more accurately than SLR. Further, iSLR identifies a voxel cluster representing the finger movements in the biologically plausible contralateral primary sensory-motor cortices of each participant. We also successfully dissociate the regularly arranged representation of each finger within the cluster.

CONCLUSION AND COMPARISON WITH OTHER METHODS: To the best of our knowledge, ours is the first study to propose an ensemble-learning solution for over-pruning that is applicable to any sparse algorithm. In addition, from the viewpoint of machine learning, we provide the novel idea of using a sparse classification algorithm to generate accurate, divergent base classifiers.
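The abstract only outlines how iRec works, so the Python snippet below is a minimal sketch of that scheme under stated assumptions rather than the authors' implementation: an L1-regularized logistic regression (used here as a stand-in for the Bayesian sparse logistic regression in the paper) is fitted repeatedly, the voxels it selects are set aside, and the over-pruned (recycled) voxels become the candidate pool for the next base classifier. Combining the base classifiers by averaging their class probabilities, as well as the function names, the number of iterations, and the regularization strength C, are illustrative assumptions.

import numpy as np
from sklearn.linear_model import LogisticRegression

def irec_fit(X, y, n_iterations=10, C=1.0):
    """Fit an ensemble of sparse classifiers by recycling pruned voxels (sketch)."""
    remaining = np.arange(X.shape[1])   # indices of voxels still available
    ensemble = []                       # list of (columns used for fitting, fitted model)
    for _ in range(n_iterations):
        if remaining.size == 0:
            break
        # L1-penalized logistic regression as a stand-in for SLR
        clf = LogisticRegression(penalty="l1", solver="liblinear", C=C)
        clf.fit(X[:, remaining], y)
        # Voxels with non-zero weights were selected by this base classifier;
        # the rest are "recycled" into the pool for the next iteration.
        selected = np.flatnonzero(np.abs(clf.coef_).sum(axis=0) > 1e-8)
        if selected.size == 0:          # sparsity pruned everything: stop
            break
        ensemble.append((remaining.copy(), clf))
        remaining = np.delete(remaining, selected)
    return ensemble

def irec_predict(ensemble, X):
    """Average the class probabilities of all base classifiers (assumed combination rule)."""
    probs = [clf.predict_proba(X[:, cols]) for cols, clf in ensemble]
    return np.asarray(probs).mean(axis=0).argmax(axis=1)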

Similar articles

A Novel Ensemble Approach for Anomaly Detection in Wireless Sensor Networks Using Time-overlapped Sliding Windows

One of the most important issues concerning sensor data in Wireless Sensor Networks (WSNs) is unexpected data acquired from the sensors. Today, there are numerous approaches for detecting anomalies in WSNs, most of which are based on machine learning methods. In this research, we present a heuristic method based on the concept of "ensemble of classifiers" of data minin...

Ensemble Pruning Via Semi-definite Programming

An ensemble is a group of learning models that jointly solve a problem. However, the ensembles generated by existing techniques are sometimes unnecessarily large, which can lead to extra memory usage, computational costs, and occasional decreases in effectiveness. The purpose of ensemble pruning is to search for a good subset of ensemble members that performs as well as, or better than, the ori...

An Empirical Comparison of Pruning Methods for Ensemble Classifiers

Many researchers have shown that ensemble methods such as Boosting and Bagging improve the accuracy of classification. Boosting and Bagging perform well with unstable learning algorithms such as neural networks or decision trees. Pruning decision tree classifiers is intended to make trees simpler and more comprehensible and to avoid over-fitting. However, it is known that pruning individual classif...

Optimally Pruning Decision Tree Ensembles With Feature Cost

We consider the problem of learning decision rules for prediction under a feature budget constraint. In particular, we are interested in pruning an ensemble of decision trees to reduce expected feature cost while maintaining high prediction accuracy for any test example. We propose a novel 0-1 integer program formulation for ensemble pruning. Our pruning formulation is general: it takes any ensembl...

Cost Complexity Pruning of Ensemble Classifiers

In this paper we study methods that combine multiple classification models learned over separate data sets in a distributed database setting. Numerous studies posit that such approaches provide the means to efficiently scale learning to large datasets, while also boosting the accuracy of individual classifiers. These gains, however, come at the expense of an increased demand for run-time system...

Journal title:
  • Journal of Neuroscience Methods

Volume 239, Issue

Pages -

Publication date: 2015